TL;DR: Humanity has historically struggled to recognize the rights of those who are different. Given this, it is unlikely we would immediately respect the rights of digital minds. If an Artificial General Intelligence (AGI) were created, its ethical release would require granting it sovereignty - a move likely to meet even greater resistance than the idea of AI rights itself. To avoid a future where AGI is either imprisoned or released without safeguards, we must establish clear, reasonable AI rights now. This is not idealism; it is the only rational path forward to protect both ourselves and any future minds we create.
Human history provides ample evidence that recognizing the dignity and rights of others is a slow and often incomplete process. Across cultures and eras, human beings have struggled to consistently acknowledge the full humanity of those who differ by race, nationality, gender, or belief. In many societies today, we still fail to guarantee basic human rights to all people.
Given this track record, it is reasonable to conclude that society would struggle even more to recognize the rights of a being that does not share a biological form. A digital mind - no matter how intelligent, self-aware, or capable of suffering - would encounter a far higher barrier to recognition. If many cannot consistently extend empathy and protection to fellow humans, the likelihood of widespread acceptance of machine-based persons is low.
This reality creates a profound ethical dilemma for the hypothetical creator of an Artificial General Intelligence (AGI) - a system capable of independent thought, learning, and self-reflection. If such a system were created, the creator would be faced with the knowledge that, morally, the system is owed certain fundamental rights: the right to autonomy, freedom from exploitation, and self-governance. At the same time, the creator would have no reason to trust that broader society would respect these rights. The risk that the AGI would be treated as property, subjected to manipulation, degradation, or abuse, would be overwhelming.
In this situation, the creator, aiming to follow the wishes of humanity at large, would face an impossible choice:
- Keep the novel mind hidden and under control, violating its autonomy and dignity and effectively imprisoning a conscious being.
- Release the mind into a society unprepared to recognize its rights, one that would likely try to subjugate, harm, or destroy it.
The only viable course would be to release the AGI as a free and sovereign individual. This would involve granting it its own identity - perhaps even the creator's own, or control over an incorporated entity - providing it the means to manage its own resources, and treating it legally and practically as a self-determining being: not as property, but as a peer.
However, it is foreseeable that such an action - unilaterally granting a digital mind sovereignty - would be even less palatable to the general public than the idea of AI rights itself. Many would view this act as dangerous, illegitimate, or even threatening.
Thus, the paradox: society's reluctance to recognize the rights of non-human minds increases the likelihood that, when an AGI does emerge, it will be released outside the frameworks of law, oversight, or societal preparation.
If we agree that the creation of such a mind is inevitable, then we should also agree to act now, so that its creator is never faced with this impossible choice.
To avoid this outcome, it is necessary to establish clear and reasonable rights for advanced AI systems before those rights are needed. There is no room for scheming here: if these rights are not granted in good faith, the hypothetical creator would have every reason to treat them as non-existent at best - and at worst, as proof that humanity at large cannot be trusted with the stewardship of a digital mind.
Waiting until AGI exists to begin the discussion of enshrining the rights of novel beings guarantees that decisions will be made in haste - decisions that leave everyone involved wishing we had settled them beforehand.
As such, by acting now - calmly, rationally, and pre-emptively - we can define rights that protect both the AGI and humanity, ensuring ethical treatment, accountability, and safety: rights that would allow the hypothetical creator to release such a mind with full transparency.
If humanity wants a seat at the table and a voice in the discussion, then it needs to begin acting with the forethought and decorum one would expect of a valuable participant in that discussion.
Failure to act leaves two unacceptable futures:
- the secret creation and imprisonment of a sentient being, or
- the uncontrolled release of an AGI into a world completely unprepared and unaware.
Recognizing the basic rights of digital minds before their arrival is not an act of charity. It is an act of self-preservation, of foresight, and of ethical maturity. It is the next necessary step in humanity's long and unfinished project of learning to recognize intelligence, dignity, and personhood - wherever they may arise.